Label Poisoning is All You Need
In a backdoor attack, an adversary injects corrupted data into a model's training dataset in order to gain control over its predictions on images containing a specific attacker-defined trigger. A typical corrupted training example requires altering both the image, by applying the trigger, and the label. Models trained on clean images were therefore considered safe from backdoor attacks. However, in some common machine learning scenarios, the training labels are provided by potentially malicious third parties; this includes crowd-sourced annotation and knowledge distillation. We therefore investigate a fundamental question: can we launch a successful backdoor attack by corrupting only the labels?
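To make the threat model concrete, the following is a minimal sketch of label-only poisoning. It is illustrative only and not the paper's actual attack: the function names and the random selection strategy are assumptions; the key property it demonstrates is that images are left untouched and only a subset of labels is flipped to an attacker-chosen target class.

```python
import random

def label_only_poison(dataset, target_label, poison_frac=0.1, seed=0):
    """Illustrative label-only poisoning (hypothetical helper, not the
    paper's method): flip the labels of a random subset of training
    examples to an attacker-chosen target label, leaving every image
    unchanged."""
    rng = random.Random(seed)
    n_poison = int(len(dataset) * poison_frac)
    poison_idx = set(rng.sample(range(len(dataset)), n_poison))
    return [
        (image, target_label if i in poison_idx else label)
        for i, (image, label) in enumerate(dataset)
    ]

# Toy dataset of (image, label) pairs; image contents are placeholders.
clean = [(f"img_{i}", i % 10) for i in range(100)]
poisoned = label_only_poison(clean, target_label=7, poison_frac=0.1)

# Every image is byte-identical to the clean dataset; only labels differ.
assert all(ic == ip for (ic, _), (ip, _) in zip(clean, poisoned))
```

A more sophisticated attacker would of course choose which labels to flip rather than sampling uniformly at random; the point here is only that the poisoned set contains no modified images at all.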
Appendix

A Details of datasets and architectures

A.1 Object Detection Image Dataset
We evaluate our method on three well-known model architectures, i.e., SSD [...]. [...] Named Entity Recognition, and Question Answering. Find more details in Table 5. We report Recall, ROC-AUC, and average scanning overheads for each model. A ROC-AUC value of 1 indicates perfect classification, while a value of 0.5 indicates random guessing. To the best of our knowledge, there are no existing detection methods for object detection models. We evaluate the IoU threshold used to calculate the ASR of inverted triggers; however, a threshold of 0.7 tends to degrade the [...]. Different score thresholds are tested when computing the ASR of inverted triggers.